
The ecosystem of machine learning competitions: Platforms, participants, and their impact on AI development

Nasios, Ioannis

arXiv.org Machine Learning

Machine learning competitions (MLCs) play a pivotal role in advancing artificial intelligence (AI) by fostering innovation, skill development, and practical problem-solving. This study provides a comprehensive analysis of major competition platforms such as Kaggle and Zindi, examining their workflows, evaluation methodologies, and reward structures. It further assesses competition quality, participant expertise, and global reach, with particular attention to demographic trends among top-performing competitors. By exploring the motivations of competition hosts, this paper underscores the significant role of MLCs in shaping AI development, promoting collaboration, and driving impactful technological progress. Furthermore, by combining literature synthesis with platform-level data analysis and practitioner insights, the paper provides a comprehensive understanding of the MLC ecosystem. Moreover, it demonstrates that MLCs operate at the intersection of academic research and industrial application, fostering the exchange of knowledge, data, and practical methodologies across domains. Their strong ties to open-source communities further promote collaboration, reproducibility, and continuous innovation within the broader ML ecosystem. By shaping research priorities, informing industry standards, and enabling large-scale crowdsourced problem-solving, these competitions play a key role in the ongoing evolution of AI. The study provides insights relevant to researchers, practitioners, and competition organizers, and includes an examination of the future trajectory and sustained influence of MLCs on AI development.



Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation

Neural Information Processing Systems

To this end, we propose a simple yet powerful paradigm for seamlessly unifying different human pose and shape-related tasks and datasets. Our formulation is centered on the ability - both at training and test time - to query any arbitrary point of the human volume, and obtain its estimated location in 3D. We achieve this by learning a continuous neural field of body point localizer functions, each of which is a differently parameterized 3D heatmap-based convolutional point localizer (detector).
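The decoding step of such heatmap-based point localizers can be illustrated with a differentiable soft-argmax: the estimated 3D location is the expected coordinate under the softmax-normalized heatmap. Below is a minimal numpy sketch of that idea only; it is not the paper's implementation, and all names and sizes are illustrative.

```python
import numpy as np

def soft_argmax_3d(heatmap, grid):
    """Differentiable localization: return the expected 3D coordinate
    under the softmax-normalized heatmap (the decoding step of a
    heatmap-based point localizer)."""
    logits = heatmap.reshape(-1)
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    # grid: one 3D coordinate per heatmap voxel, flattened to (N, 3)
    return probs @ grid.reshape(-1, 3)

# Toy example: a 4x4x4 volume with voxel coordinates in [0, 1]^3.
axis = np.linspace(0.0, 1.0, 4)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

# A sharply peaked heatmap at voxel (2, 1, 3) localizes near its coordinate.
heatmap = np.full((4, 4, 4), -10.0)
heatmap[2, 1, 3] = 10.0
print(soft_argmax_3d(heatmap, grid))  # close to grid[2, 1, 3]
```

A diffuse heatmap would instead yield a coordinate blended between voxels, which is what makes the decoding differentiable and trainable end to end.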



Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering

He, Dongxiao

Neural Information Processing Systems

Graph Contrastive Learning (GCL) has emerged as a powerful approach for generating graph representations without the need for manual annotation. Most advanced GCL methods fall into three main frameworks: node discrimination, group discrimination, and bootstrapping schemes, all of which achieve comparable performance. However, the underlying mechanisms and factors that contribute to their effectiveness are not yet fully understood.
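Of the three frameworks mentioned, node discrimination is the most direct to sketch: each node's embedding under one augmented view is pulled toward its own embedding under a second view and pushed away from all other nodes, typically via an InfoNCE-style loss. The following numpy sketch shows that loss in isolation (no graph encoder or augmentations); it is an illustration of the framework, not any specific paper's method.

```python
import numpy as np

def node_infonce_loss(z1, z2, tau=0.5):
    """Node-discrimination contrastive loss: row i of z1 should match
    row i of z2 (positive pair) against all other rows (negatives)."""
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (N, N) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives on the diagonal

# Matched views (identical embeddings) score a much lower loss than
# views whose node correspondence has been broken.
z = np.eye(4)
matched = node_infonce_loss(z, z)
mismatched = node_infonce_loss(z, np.roll(z, 1, axis=0))
print(matched, mismatched)
```

Group discrimination and bootstrapping replace this node-level objective with summary-vector contrast and teacher-student prediction respectively, but all three act on the same two-view setup.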




A Potential Negative Societal Impacts

Neural Information Processing Systems

In addition, users may become overly dependent on the model's outputs. For the feedback, we ask the person: "Please consider the quality of the ... Give a score (1-5). 1 means its quality is bad, and 5 means its quality is very good." The interface of the user study is shown in Fig. A1. We report the average scores in Tab. ... We have a total of 1.1M training data in FIRE. In Fig. A2, we present the curves of AT, ATR, ATR, and RR using different ...; results show that more data leads to better performance.


A Limitations and Societal Impacts

Neural Information Processing Systems

Limitations: One limitation of our model is its potential for data bias, which could limit the applications of the model. Societal impacts: MLLMs could be used to create fake news articles or social media posts.

Table 1: Hyperparameters of causal language model of K

  Number of layers               24
  Hidden size                    2,048
  FFN inner hidden size          8,192
  Attention heads                32
  Dropout                        0.1
  Attention dropout              0.1
  Activation function            GeLU [1]
  Vocabulary size                64,007
  Soft tokens V size             64
  Max length                     2,048
  Relative position embedding    xPos [2]
  Initialization                 Magneto [3]

The detailed instruction tuning hyperparameters are listed in Table 3. The models are trained on web-scale multimodal corpora.
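A hyperparameter listing like Table 1 maps naturally onto a structured config object. The sketch below is an illustrative container only (the field names mirror the table; it is not the authors' code or any library's API), with a quick consistency check that the head dimension divides evenly.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CausalLMConfig:
    """Illustrative container for the Table 1 hyperparameters."""
    num_layers: int = 24
    hidden_size: int = 2048
    ffn_inner_hidden_size: int = 8192
    attention_heads: int = 32
    dropout: float = 0.1
    attention_dropout: float = 0.1
    activation: str = "GeLU"
    vocab_size: int = 64_007
    soft_token_v_size: int = 64
    max_length: int = 2048
    rel_pos_embedding: str = "xPos"
    initialization: str = "Magneto"

cfg = CausalLMConfig()
# Sanity check: hidden size must split evenly across attention heads.
assert cfg.hidden_size % cfg.attention_heads == 0
print(asdict(cfg)["vocab_size"])  # 64007
```

Freezing the dataclass keeps the recorded hyperparameters immutable, which is convenient when the same config is logged and reused across training runs.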